It is quite laborious and costly to manually label LiDAR point cloud data for training high-quality 3D object detectors. This work proposes a weakly supervised framework that learns 3D detection from a few weakly annotated examples. This is achieved by a two-stage architecture design. Stage 1 learns to generate cylindrical object proposals under inaccurate and inexact supervision, obtained by our proposed BEV center-click annotation strategy, in which only the horizontal object centers are click-annotated in bird's-eye-view scenes. Stage 2 learns to predict cuboids and confidence scores in a coarse-to-fine, cascade manner, under incomplete supervision, i.e., only a small portion of object cuboids are precisely annotated. On the KITTI dataset, using only 500 weakly annotated scenes and 534 precisely labeled vehicle instances, our method achieves 86-97% of the performance of current top-leading, fully supervised detectors (which require 3712 exhaustively annotated scenes with 15654 instances). More importantly, with our elaborately designed network architecture, the trained model can be applied as a 3D object annotator, supporting both automatic and active (human-in-the-loop) working modes. The annotations generated by our model can be used to train 3D object detectors, which achieve over 95% of their original performance (obtained with manually labeled training data). Our source code is available at https://isrc.iscas.ac.cn/gitlab/research/siamcan.

This paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network. Our method trains a lightweight deep network to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image. The curve estimation is specially designed, considering pixel value range, monotonicity, and differentiability.
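As a concrete illustration of these three design constraints, a quadratic curve of the form LE(x) = x + α·x·(1 − x) with α ∈ [−1, 1] maps [0, 1] to [0, 1], is monotonic, and is differentiable; applying it iteratively with a per-pixel α map produces a higher-order adjustment curve. A minimal NumPy sketch (the iteration count and the constant α used here are illustrative, not the network's learned values):

```python
import numpy as np

def apply_curve(x, alpha, iterations=8):
    # Iteratively apply the quadratic curve LE(x) = x + alpha * x * (1 - x).
    # For alpha in [-1, 1] the mapping keeps values in [0, 1] and is monotonic.
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)
    return x

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))        # normalized low-light image in [0, 1]
alpha = np.full_like(img, 0.6)     # per-pixel curve parameters (in practice, predicted by the network)
out = apply_curve(img, alpha)
```

Because each iteration is a closed-form pixel-wise mapping, enhancement reduces to a few element-wise operations, which is what makes the inference so fast.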
Zero-DCE is appealing in its relaxed assumption on reference images, i.e., it does not require any paired or unpaired data during training. This is achieved through a set of carefully formulated non-reference losses, which implicitly measure the enhancement quality and drive the learning of the network. Despite its simplicity, Zero-DCE generalizes well to diverse lighting conditions. Our method is efficient, as image enhancement can be achieved by a simple nonlinear curve mapping. We further present an accelerated and lightweight version of Zero-DCE, called Zero-DCE++, that takes advantage of a tiny network with just 10K parameters. Zero-DCE++ has a fast inference speed (1000/11 FPS on a single GPU/CPU) while keeping the enhancement performance of Zero-DCE. Experiments on various benchmarks demonstrate the advantages of our method over state-of-the-art methods. The potential benefits of our method to face detection in the dark are also discussed.

Low-rank tensor recovery (LRTR) is a natural extension of low-rank matrix recovery (LRMR) to high-dimensional arrays, which aims to reconstruct an underlying tensor X from incomplete linear measurements M(X). However, LRTR ignores the error caused by quantization, limiting its application when the quantization level is low. In this work, we take into account the impact of extreme quantization and suppose the quantizer degrades into a comparator that acquires only the signs of M(X). We still hope to recover X from these binary measurements. Under the tensor Singular Value Decomposition (t-SVD) framework, two recovery methods are proposed: the first is a tensor hard singular tube thresholding method; the second is a constrained tensor nuclear norm minimization method. These methods can recover a real n1 × n2 × n3 tensor X with tubal rank r from m random Gaussian binary measurements, with errors decaying at a polynomial rate in the oversampling factor m/((n1 + n2)n3r).
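The one-bit measurement model described above can be sketched in a few lines. This is only an illustration of the forward model and the standard sign-weighted back-projection proxy; the hard singular tube thresholding step itself (projecting the proxy onto tubal-rank-r tensors via the t-SVD) is omitted, and the sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3, m = 6, 5, 4, 200

# Ground-truth tensor (a random tensor here; the theory assumes low tubal rank)
X = rng.standard_normal((n1, n2, n3))

# m random Gaussian measurement tensors A_i; the comparator keeps only signs:
#   y_i = sign(<A_i, X>)
A = rng.standard_normal((m, n1, n2, n3))
y = np.sign(np.tensordot(A, X, axes=3))

# Sign-weighted average of the measurement tensors: a first-order proxy for X
# (up to scale), the quantity that thresholding-based recovery methods refine.
X_proxy = np.tensordot(y, A, axes=([0], [0])) / m
```

Note that the scale of X is unrecoverable from signs alone, which is why recovery guarantees in this setting are stated for normalized tensors.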
To improve the convergence rate, we develop a new quantization scheme under which the convergence rate can be accelerated to an exponential function of the oversampling factor. Numerical experiments verify our results, and applications to real-world data demonstrate the promising performance of the proposed methods.

The task of multi-label image recognition is to predict the set of object labels present in an image. As objects normally co-occur in an image, it is desirable to model label dependencies to improve recognition performance. To capture and explore such important information, we propose Graph Convolutional Network (GCN) based models for multi-label recognition, where directed graphs are constructed over classes and information is propagated between classes to learn inter-dependent class-level representations. Following this idea, we design two particular models that approach multi-label classification from different views. In our first model, prior knowledge about the class dependencies is integrated into classifier learning. Specifically, we propose Classifier-Learning-GCN to map class-level semantic representations (e.g., word embeddings) into classifiers that maintain the inter-class topology. In our second model, we decompose the visual representation of an image into a set of label-aware features and propose Prediction-Learning-GCN to encode such features into inter-dependent image-level prediction scores. Furthermore, we present an effective correlation matrix construction approach to capture inter-class relationships and consequently guide information propagation among classes. Empirical results on generic multi-label recognition demonstrate the effectiveness of both proposed models. Moreover, the proposed methods also show advantages in other multi-label related applications.

A common challenge in nonparametric inference is its high computational complexity when data volume is large.
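The Classifier-Learning-GCN idea above, propagating class word embeddings through graph convolutions over a class-correlation matrix to obtain classifier weights, can be sketched as follows. All dimensions, the two-layer depth, the row-normalized propagation matrix, and the ReLU nonlinearity are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
C, d_word, d_feat = 5, 16, 32  # classes, word-embedding dim, image-feature dim (illustrative)

E = rng.standard_normal((C, d_word))      # class-level semantic representations (word embeddings)
A = rng.random((C, C))                    # class correlation matrix (here random; learned/estimated in practice)
A_hat = A / A.sum(axis=1, keepdims=True)  # row-normalized propagation matrix

# Two graph-convolution layers: propagate between classes, then transform.
W1 = rng.standard_normal((d_word, 64))
W2 = rng.standard_normal((64, d_feat))
H = np.maximum(A_hat @ E @ W1, 0.0)       # hidden class representations (ReLU)
classifiers = A_hat @ H @ W2              # (C, d_feat): one classifier per class, linear final layer

# Scoring: dot each inter-dependent classifier with the global image feature.
img_feature = rng.standard_normal(d_feat)
scores = classifiers @ img_feature        # one prediction score per label
```

The key point is that the classifier for each class is a function of every class's embedding, so co-occurrence structure in A_hat is baked into the decision boundaries.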
In this paper, we develop computationally efficient nonparametric testing by employing a random projection strategy. In the specific kernel ridge regression setup, a simple distance-based test statistic is proposed. Notably, we derive the minimum number of random projections that is sufficient for achieving testing optimality in terms of the minimax rate. An adaptive testing procedure is further established without prior knowledge of regularity. One technical contribution is to establish upper bounds for a range of tail sums of empirical kernel eigenvalues. Simulations and real data analysis are conducted to support our theory.
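A minimal sketch of the random-projection idea in a kernel ridge setup is shown below. This is a generic illustration, not the paper's exact statistic or calibration: the kernel, bandwidth, ridge parameter, and number of projections are all placeholder choices, and a real test would compare the statistic against a null distribution:

```python
import numpy as np

rng = np.random.default_rng(2)
n, s = 200, 20  # sample size and number of random projections (illustrative; s << n is the point)

x = rng.uniform(size=n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)

# Gaussian kernel matrix over the inputs
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1)

# Random projection: compress the n-dimensional kernel features to an n x s sketch,
# so the ridge system solved below is s x s instead of n x n.
P = rng.standard_normal((n, s)) / np.sqrt(s)
Ks = K @ P
lam = 1e-2
alpha = np.linalg.solve(Ks.T @ Ks + lam * np.eye(s), Ks.T @ y)
fitted = Ks @ alpha

# Distance-based statistic: squared norm of the fitted values; large values
# indicate departure from the zero-function null hypothesis.
T = np.sum(fitted ** 2) / n
```

Solving the projected s × s system costs O(ns² + s³) rather than the O(n³) of the full kernel ridge solve, which is the computational saving the projection buys.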